# Add Extended Thinking support for reasoning models #552
## Summary
Adds support for Extended Thinking (also known as reasoning) across
Anthropic, Gemini, and OpenAI/Grok providers. This feature exposes the
model's internal reasoning process, allowing applications to access both
the thinking content and the final response.
## Usage
```ruby
chat = RubyLLM.chat(model: 'claude-opus-4-5-20251101')
              .with_thinking(budget: :medium) # or :low, :high, or Integer

response = chat.ask('What is 15 * 23?')
response.thinking # => 'Let me break this down step by step...'
response.content  # => 'The answer is 345.'

# Streaming with thinking
chat.ask('Solve this') do |chunk|
  print chunk.thinking if chunk.thinking
  print chunk.content
end
```
## Provider Support
- Anthropic: Uses `thinking` block with `budget_tokens` parameter
- Gemini 2.5: Uses `thinkingConfig` with `thinkingBudget` (tokens)
- Gemini 3: Uses `thinkingConfig` with `thinkingLevel` (low/medium/high)
- OpenAI/Grok: Uses `reasoning_effort` parameter (low/high)
Budget symbols (`:low`, `:medium`, `:high`) are translated to appropriate provider-specific values. Integer budgets specify token counts directly.
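As a rough sketch of that translation (the helper name and mapping values here are illustrative, not the PR's actual internals):

```ruby
# Symbol budgets map through a per-provider table; Integer budgets
# pass straight through as token counts.
ANTHROPIC_BUDGETS = { low: 1_024, medium: 10_000, high: 32_000 }.freeze

def normalize_budget(budget, map)
  budget.is_a?(Integer) ? budget : map.fetch(budget)
end

normalize_budget(:medium, ANTHROPIC_BUDGETS) # => 10_000
normalize_budget(4_096, ANTHROPIC_BUDGETS)   # => 4_096
```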
## Changes
Core:
- Message: Added `thinking` and protected `thinking_signature` attributes
- Chat: Added `with_thinking(budget:)` and `thinking_enabled?` methods
- StreamAccumulator: Accumulates thinking content during streaming (see the sketch after this list)
- UnsupportedFeatureError: New error for unsupported feature requests
Providers:
- Anthropic: Full thinking support with signature for multi-turn
- Gemini: Supports both 2.5 (budget) and 3.0 (effort level) APIs
- OpenAI: Supports Grok models via `reasoning_effort`
- Bedrock/Mistral: Accept thinking parameter (no-op for compatibility)
ActiveRecord:
- Migration template includes `thinking` and `thinking_signature` columns
- ChatMethods: Added `with_thinking` delegation and persistence
- MessageMethods: Extracts thinking attributes in `to_llm`
Documentation:
- docs/_core_features/thinking.md
Tests:
- 82 examples covering unit and integration tests
- VCR cassettes for claude-sonnet-4, claude-opus-4, claude-opus-4-5,
and gemini-2.5-flash
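For the StreamAccumulator piece, a minimal sketch of what accumulating thinking alongside content could look like; the class shape and method names are assumptions for illustration, not the PR's code:

```ruby
# Hypothetical accumulator: concatenates thinking and content deltas
# from streamed chunks into a single final message.
class StreamAccumulator
  def initialize
    @thinking = +''
    @content = +''
  end

  def add(chunk)
    @thinking << chunk.thinking if chunk.thinking
    @content << chunk.content if chunk.content
  end

  def to_message
    { role: :assistant, content: @content,
      thinking: @thinking.empty? ? nil : @thinking }
  end
end
```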
## Type of change
- [ ] Bug fix
- [x] New feature
- [ ] Breaking change

## Scope check

## Quality check
- `overcommit --install` and all hooks pass
- `bundle exec rspec` (736 examples, 0 failures)

## API changes
- `with_thinking`, `thinking_enabled?`, `UnsupportedFeatureError`

## Related issues
Closes #551
```ruby
  payload
end

def grok_model?(model)
```
I'm a bit confused by this. Does OpenAI provide a model called "grok"?
No, but RubyLLM routes to Grok via OpenRouter, which uses openai/chat.rb.

AI explanation:
- OpenRouter inherits from OpenAI: `class OpenRouter < OpenAI` (line 6 in openrouter.rb)
- Grok models are routed via OpenRouter: `"provider": "openrouter"` in models.json
- No dedicated xAI provider exists

So yes, Grok API calls via OpenRouter do use openai/chat.rb because OpenRouter inherits all of OpenAI's chat logic.

The `grok_model?` method in openai/chat.rb exists because:
- OpenRouter uses the OpenAI-compatible API format
- When a Grok model is detected, the `reasoning_effort` parameter is added for thinking support

The naming is technically correct but could be confusing. We added a clarifying comment to the method.
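For illustration, a detection helper along these lines would do the job; this is a hedged sketch, not the PR's actual code, and the id patterns are assumptions:

```ruby
# Hypothetical sketch: OpenRouter Grok ids look like "x-ai/grok-2" or
# "grok-beta", so a substring match decides whether to add reasoning_effort.
def grok_model?(model)
  model.to_s.downcase.include?('grok')
end
```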
- Fix incorrect comment that said "OpenAI" instead of "Anthropic"
- Replace `.present?` with `&& !.empty?` to avoid an ActiveSupport dependency in core library code

Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>

Explains why Grok model detection exists in the OpenAI provider: Grok models are accessed via OpenRouter, which inherits from OpenAI.

Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
crmne left a comment
Thanks so much for this PR. It's a strong base and I really appreciate the work you put into it. I left a bunch of comments, mostly around separating effort and budget more cleanly, since some providers like Anthropic need both.
This feature is fairly high-priority for me, so unless you feel like you can iterate on it and land it over the weekend, I may take a pass at finalizing the implementation myself. That said, I'd love to keep this collaborative, so if you're up for it, feel free to push another update soon and we'll get it over the line.
Thanks again! Great to see this coming together.
> ## What is Extended Thinking?
>
> Extended Thinking (also known as "reasoning") is a feature that exposes the model's internal reasoning process. When enabled, models will "think through" problems step-by-step before providing their final response. This is particularly useful for:
> exposes the model's internal reasoning process

Not technically true. The model you used for this PR optimized for "exposing the model's internal reasoning process", but what extended thinking really does is give the AI more time and computational "budget" to deeply analyze complex problems, break them down, plan solutions, and self-reflect before answering. That significantly boosts performance on tasks like coding, math, and logic, at the expense of being slower and more costly.

It's like a multi-pass codec for video.
> ### Budget Options
>
> The `budget` parameter controls how much "thinking" the model should do:
>
> | Budget | Description |
> |--------|-------------|
> | `:low` | Minimal thinking, faster responses |
> | `:medium` | Balanced thinking (default) |
> | `:high` | Maximum thinking, most thorough |
> | Integer | Specific token budget (provider-dependent) |
>
> ```ruby
> # Symbol budgets
> chat.with_thinking(budget: :low)
> chat.with_thinking(budget: :medium)
> chat.with_thinking(budget: :high)
>
> # Integer budget (tokens)
> chat.with_thinking(budget: 10_000)
> ```
Here I think we're conflating effort and budget. Certain providers prefer effort (e.g. OpenAI), others prefer budget and have optional effort (e.g. Anthropic), therefore even a minimal interface should have both.
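One possible shape for an interface that carries both knobs; purely a sketch of the direction this comment suggests, not the final API:

```ruby
# Hypothetical: accept effort and budget separately and let each
# provider consume whichever parameter(s) it understands.
chat.with_thinking(effort: :medium)                # OpenAI-style effort
chat.with_thinking(budget: 10_000)                 # Anthropic-style token budget
chat.with_thinking(effort: :high, budget: 32_000)  # providers that take both
```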
| chat.ask("Complex question here...") do |chunk| | ||
| thinking_content << chunk.thinking if chunk.thinking | ||
| response_content << chunk.content if chunk.content | ||
|
|
||
| # Update UI with separated content | ||
| update_thinking_panel(thinking_content) | ||
| update_response_panel(response_content) | ||
| end |
this is great design! doesn't break the current way of looping chunks.
> ### Provider-Specific Behavior
>
> | Provider | Models | Implementation |
> |----------|--------|----------------|
> | Anthropic | claude-opus-4-*, claude-sonnet-4-* | `thinking` block with `budget_tokens` |
> | Gemini | gemini-2.5-*, gemini-3-* | `thinkingConfig` with budget or effort level |
> | OpenAI/Grok | grok-* models | `reasoning_effort` parameter |
>
> Budget symbols are automatically translated to provider-specific values:
>
> | Symbol | Anthropic | Gemini 2.5 | Gemini 3 | Grok |
> |--------|-----------|------------|----------|------|
> | `:low` | 1,024 tokens | 1,024 tokens | "low" | "low" |
> | `:medium` | 10,000 tokens | 8,192 tokens | "medium" | "high" |
> | `:high` | 32,000 tokens | 24,576 tokens | "high" | "high" |
this will need to be rewritten taking into account the comment above
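For instance, a rewritten mapping might keep effort separate from budget; a hypothetical shape, assuming the split described in the earlier comment:

```ruby
# Illustrative only: effort symbols translate per provider, while an
# explicit budget always means tokens (where the provider supports it).
EFFORT_TO_PARAMS = {
  anthropic: { low: { budget_tokens: 1_024 }, high: { budget_tokens: 32_000 } },
  openai:    { low: { reasoning_effort: 'low' }, high: { reasoning_effort: 'high' } }
}.freeze

EFFORT_TO_PARAMS.fetch(:anthropic).fetch(:low) # => { budget_tokens: 1024 }
```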
```ruby
class AddThinkingToMessages < ActiveRecord::Migration[7.0]
  def change
    add_column :messages, :thinking, :text
    add_column :messages, :thinking_signature, :text
  end
end
```
Slightly better:

```ruby
add_column :chunks, :thinking_text, :text
add_column :chunks, :thinking_signature, :string
```

```ruby
attrs[:thinking] = message.thinking if @message.has_attribute?(:thinking)
attrs[:thinking_signature] = Messages.signature_for(message) if @message.has_attribute?(:thinking_signature)
```
I would do something like this:

```ruby
class Chunk < ApplicationRecord
  def thinking
    return nil unless thinking_text || thinking_signature

    OpenStruct.new(
      text: thinking_text,
      signature: thinking_signature
    )
  end
end
```

so we have the exact same interface between PORO and Rails.
```ruby
thinking: thinking_value,
thinking_signature: thinking_signature_value,
```
I would make thinking its own object:

```ruby
module RubyLLM
  class Thinking
    attr_reader :text, :signature

    def initialize(text: nil, signature: nil)
      @text = text
      @signature = signature
    end
  end
end
```
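Either way, callers could then stay identical across the plain-Ruby and ActiveRecord paths; a hypothetical usage sketch:

```ruby
# Works the same whether `message` is a RubyLLM PORO or an AR-backed record.
if (thinking = message.thinking)
  puts thinking.text      # the accumulated reasoning content
  puts thinking.signature # provider signature used for multi-turn replay
end
```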
```ruby
raise UnsupportedFeatureError,
      "Model '#{@model.id}' does not support extended thinking"
```
Please remove this: this goes against our current philosophy of never stopping the user from doing something we think is wrong if the API will do the same.
```ruby
# Error raised when a feature is not supported by a model
class UnsupportedFeatureError < Error
  def initialize(message)
    super(nil, message)
  end
end
```
no need.
```diff
 def read_from_json(file = RubyLLM.config.model_registry_file)
-  data = File.exist?(file) ? File.read(file) : '[]'
+  data = File.exist?(file) ? File.read(file, encoding: 'UTF-8') : '[]'
```
why?